Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability
© 2010-2012 IEEE. In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Unlike existing work, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into which impressions can be formed and predicted more dynamically, varying with situational context, and which appear to be more static and stable over time. This research work was supported by the EPSRC MAPTRAITS Project (Grant Ref: EP/K017500/1) and the EPSRC HARPS Project under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
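To make the decision-level fusion concrete, here is a minimal Python sketch: two per-modality regressors are trained on the same continuous trait labels and their per-frame outputs are averaged with a fusion weight. The ridge regressors and the weight are illustrative stand-ins, not the paper's models, which additionally learn temporal relationships across frames.

```python
# Hedged sketch of decision-level audio-visual fusion for time-continuous
# trait prediction. Ridge regressors and the fusion weight w are assumptions;
# they stand in for the paper's temporal regression models.
import numpy as np
from sklearn.linear_model import Ridge

def fuse_trait_predictions(Xv_tr, Xa_tr, y_tr, Xv_te, Xa_te, w=0.7):
    visual = Ridge().fit(Xv_tr, y_tr).predict(Xv_te)   # visual-cue regressor
    audio = Ridge().fit(Xa_tr, y_tr).predict(Xa_te)    # audio-cue regressor
    return w * visual + (1 - w) * audio                # per-frame fused score
```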
Inferring Student Engagement in Collaborative Problem Solving from Visual Cues
Automatic analysis of students' collaborative interactions in physical settings is an emerging problem with a wide range of applications in education. However, this problem has proven challenging due to the complex, interdependent and dynamic nature of student interactions in real-world contexts. In this paper, we propose a novel framework for the classification of student engagement in open-ended, face-to-face collaborative problem-solving (CPS) tasks purely from video data. Our framework i) estimates body pose from the recordings of student interactions; ii) combines face recognition with a Bayesian model to identify and track students with high accuracy; and iii) classifies student engagement leveraging a Team Long Short-Term Memory (Team LSTM) neural network model. This novel approach allows the LSTMs to capture dependencies among individual students in their collaborative interactions. Our results show that the Team LSTM significantly improves performance compared to the baseline method that takes individual student trajectories into account independently.
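As an illustration of the Team LSTM idea, the sketch below runs one LSTM over each student's pose trajectory and pools the resulting hidden states into a shared team context before classification. It is a minimal PyTorch rendering of the concept; dimensions, the pooling choice and the class count are assumptions, not the authors' implementation.

```python
# Minimal Team-LSTM-style engagement classifier (PyTorch). Shapes, the mean
# pooling across students and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class TeamLSTM(nn.Module):
    def __init__(self, pose_dim=34, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, poses):
        # poses: (batch, n_students, time, pose_dim)
        b, s, t, d = poses.shape
        out, _ = self.lstm(poses.reshape(b * s, t, d))
        h = out[:, -1].reshape(b, s, -1)           # last state per student
        team = h.mean(dim=1, keepdim=True)         # shared team context
        fused = torch.cat([h, team.expand_as(h)], dim=-1)
        return self.classifier(fused)              # per-student logits
```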
Automatic Replication of Teleoperator Head Movements and Facial Expressions on a Humanoid Robot
Robotic telepresence aims to create a physical presence for a remotely located human (teleoperator) by reproducing their verbal and nonverbal behaviours (e.g. speech, gestures, facial expressions) on a robotic platform. In this work, we propose a novel teleoperation system that combines the replication of facial expressions of emotions (neutral, disgust, happiness, and surprise) and head movements on the fly on the humanoid robot Nao. A robot's expression of emotions is constrained by its physical and behavioural capabilities. As the Nao robot has a static face, we use the LEDs located around its eyes to reproduce the teleoperator's expressions of emotions. Using a web camera, we computationally detect the facial action units and measure the head pose of the operator. The emotion to be replicated is inferred from the detected action units by a neural network. Simultaneously, the measured head motion is smoothed and bounded to the robot's physical limits by applying a constrained-state Kalman filter. In order to evaluate the proposed system, we conducted a user study by asking 28 participants to use the replication system, displaying facial expressions and head movements while being recorded by a web camera. Subsequently, 18 external observers viewed the recorded clips via an online survey and assessed the quality of the robot's replication of the participants' behaviours. Our results show that the proposed teleoperation system can successfully communicate emotions and head movements, resulting in high agreement among the external observers (ICC_E = 0.91, ICC_HP = 0.72). This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
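The head-motion pathway can be illustrated with a scalar Kalman filter whose state is clamped to the robot's joint range, as below. The noise parameters and the random-walk motion model are assumptions made for brevity; the ±119.5° bound matches the Nao's HeadYaw limit.

```python
# Illustrative constrained Kalman smoothing of one head-pose angle (yaw),
# clamped to the Nao's HeadYaw limits. q and r are assumed noise settings.
import numpy as np

YAW_LIMIT = np.radians(119.5)      # Nao HeadYaw range: roughly +/-119.5 deg

def smooth_head_yaw(measurements, q=1e-4, r=1e-2):
    x, p = 0.0, 1.0                # state estimate and its variance
    smoothed = []
    for z in measurements:
        p += q                     # predict (random-walk motion model)
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with the webcam measurement
        p *= 1 - k
        x = float(np.clip(x, -YAW_LIMIT, YAW_LIMIT))  # enforce joint limits
        smoothed.append(x)
    return smoothed
```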
Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions
Engagement is crucial to designing intelligent systems that can adapt to the characteristics of their users. This paper focuses on automatic analysis and classification of engagement based on humans' and the robot's personality profiles in a triadic human-human-robot interaction setting. More explicitly, we present a study that involves two participants interacting with a humanoid robot, and investigate how participants' personalities can be used together with the robot's personality to predict the engagement state of each participant. The fully automatic system is first trained to predict the Big Five personality traits of each participant by extracting individual and interpersonal features from their nonverbal behavioural cues. Second, the output of the personality prediction system is used as an input to the engagement classification system. Third, we focus on the concept of "group engagement", which we define as the collective engagement of the participants with the robot, and analyse the impact of similar and dissimilar personalities on the engagement classification. Our experimental results show that (i) using the automatically predicted personality labels for engagement classification yields an F-measure on par with using the manually annotated personality labels, demonstrating the effectiveness of the proposed automatic personality prediction module; (ii) using the individual and interpersonal features without utilising personality information is not sufficient for engagement classification, whereas incorporating the participants' and robot's personalities with individual/interpersonal features increases engagement classification performance; and (iii) the best classification performance is achieved when the participants and the robot are extroverted, while the worst results are obtained when all are introverted. This work was performed within the Labex SMART project (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. The work of Oya Celiktutan and Hatice Gunes is also funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1). This is the author accepted manuscript. The final version is available from the Institute of Electrical and Electronics Engineers via http://dx.doi.org/10.1109/ACCESS.2016.261452
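The two-stage structure of the system can be sketched as follows: stage one regresses the five traits from nonverbal features, and stage two appends the predicted traits to those features for engagement classification. The SVM models and array layout are assumptions chosen for brevity, not the paper's exact configuration.

```python
# Conceptual two-stage pipeline (scikit-learn): predicted Big Five scores are
# appended to individual/interpersonal features before engagement
# classification. Model choices and shapes are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC, SVR

def train_two_stage(X_cues, y_traits, y_engagement):
    # Stage 1: one regressor per Big Five trait from nonverbal cues.
    trait_models = [SVR().fit(X_cues, y_traits[:, t]) for t in range(5)]
    traits = np.column_stack([m.predict(X_cues) for m in trait_models])
    # Stage 2: engagement classifier on cues + predicted personality.
    clf = SVC().fit(np.hstack([X_cues, traits]), y_engagement)
    return trait_models, clf
```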
Automatic detection of cognitive impairment with virtual reality
Cognitive impairment features in neuropsychiatric conditions and, when undiagnosed, can have a severe impact on the affected individual's safety and ability to perform daily tasks. Virtual Reality (VR) systems are increasingly being explored for the recognition, diagnosis and treatment of cognitive impairment. In this paper, we describe novel VR-derived measures of cognitive performance and show their correspondence with clinically validated cognitive performance measures. We use an immersive VR environment called VStore in which participants complete a simulated supermarket shopping task. People with psychosis (k=26) and non-patient controls (k=128) participated in the study, spanning ages 20-79 years. Participants were split into two cohorts, a homogeneous non-patient cohort (k=99 non-patient participants) and a heterogeneous cohort (k=26 patients, k=29 non-patient participants). Participants' spatio-temporal behaviour in VStore is used to extract four features, namely route optimality score, proportional distance score, execution error score, and hesitation score, using the Traveling Salesman Problem and explore-exploit decision mathematics. These extracted features are mapped to seven validated cognitive performance scores via linear regression models. The most statistically important feature is found to be the hesitation score. When combined with the remaining extracted features, the multiple linear regression model yielded statistically significant results with R² = 0.369, F-Stat = 7.158, p(F-Stat) = 0.000128.
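The reported statistics (R², F-Stat and its p-value) correspond to an ordinary least squares fit of the four extracted features against a cognitive score, roughly as sketched below. The column names are assumptions derived from the feature list in the abstract.

```python
# Sketch of the reported analysis: regressing a validated cognitive score on
# the four VStore features with OLS. Column names are assumed from the text.
import pandas as pd
import statsmodels.api as sm

FEATURES = ["route_optimality", "proportional_distance",
            "execution_error", "hesitation"]

def fit_cognitive_model(df: pd.DataFrame, score_col: str):
    X = sm.add_constant(df[FEATURES])
    model = sm.OLS(df[score_col], X).fit()
    # R-squared, F-statistic and its p-value, as reported above.
    return model.rsquared, model.fvalue, model.f_pvalue
```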
Personality perception of robot avatar tele-operators
© 2016 IEEE. Nowadays a significant part of human-human interaction takes place over distance. Tele-operated robot avatars, in which an operator's behaviours are portrayed by a robot proxy, have the potential to improve distance interaction, e.g., improving social presence and trust. However, having communication mediated by a robot changes the perception of the operator's appearance and behaviour, which have been shown to be used alongside vocal cues in judging personality. In this paper we present a study that investigates how robot mediation affects the way the personality of the operator is perceived. More specifically, we aim to investigate whether judges of personality can be consistent in assessing personality traits, can agree with one another, can agree with operators' self-assessed personality, and shift their perceptions to incorporate characteristics associated with the robot's appearance. Our experiments show that (i) judges utilise robot appearance cues along with operator vocal cues to make their judgments, (ii) operators' arm gestures reproduced on the robot aid personality judgments, and (iii) how personality cues are perceived and evaluated through speech, gesture and robot appearance is highly operator-dependent. We discuss the implications of these results for both tele-operated and autonomous robots that aim to portray personality. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1). This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/HRI.2016.745174
Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement
In this paper we introduce a novel dataset, the Multimodal Human-Human-Robot Interactions (MHHRI) dataset, with the aim of studying personality simultaneously in human-human interactions (HHI) and human-robot interactions (HRI), and its relationship with engagement. Multimodal data was collected during a controlled interaction study in which dyadic interactions between two human participants and triadic interactions between two human participants and a robot took place, with interactants asking each other a set of personal questions. Interactions were recorded using two static and two dynamic cameras as well as two biosensors, and meta-data was collected by having participants fill in two types of questionnaires: one assessing their own personality traits and their perceived engagement with their partners (self labels), and one assessing the personality traits of the other participants in the study (acquaintance labels). As a proof of concept, we present baseline results for personality and engagement classification. Our results show that (i) trends in personality classification performance remain the same with respect to the self and the acquaintance labels across the HHI and HRI settings; (ii) for extroversion, the acquaintance labels yield better results than the self labels; and (iii) in general, multi-modality yields better performance for the classification of personality traits. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
Personality Perception of Robot Avatar Teleoperators in Solo and Dyadic Tasks
Humanoid robot avatars are a potential new telecommunication tool, whereby a user is remotely represented by a robot that replicates their arm, head and, possibly, face movements. They have been shown to have a number of benefits over more traditional media such as phones or video calls. However, using a teleoperated humanoid as a communication medium inherently changes the appearance of the operator, and appearance-based stereotypes are used in interpersonal judgments (whether consciously or unconsciously). One such judgment that plays a key role in how people interact is personality. Hence, we were motivated to investigate if and how using a robot avatar alters the perceived personality of teleoperators. To do so, we carried out two studies in which participants performed three communication tasks, solo in the first study and dyadic in the second, and were recorded on video both with and without robot mediation. Judges recruited via online crowdsourcing services then made personality judgments of the participants in the video clips. We observed that judges were able to make internally consistent trait judgments in both communication conditions. However, judge agreement was affected by robot mediation, although which traits were affected was highly task dependent. Our most important finding was that in dyadic tasks personality trait perception shifted to incorporate cues relating to the robot's appearance when it was used to communicate. Our findings have important implications for telepresence robot design and personality expression in autonomous robots. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
Learning to self-manage by intelligent monitoring, prediction and intervention
Despite the growing prevalence of multimorbidities, current digital self-management approaches still prioritise single conditions. The future of out-of-hospital care requires researchers to expand their horizons: integrated assistive technologies should enable people to live their lives well regardless of their chronic conditions. Yet many of the current digital self-management technologies are not equipped to handle this problem. In this position paper, we suggest that the solution to these issues is a model-aware and data-agnostic platform built on a tailored self-management plan and three integral concepts: Monitoring (M) multiple information sources to empower Predictions (P) and trigger intelligent Interventions (I). Here we present our ideas for the formation of such a platform, and its potential impact on quality of life for people living with chronic conditions.
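The M-P-I concept described above can be reduced to a small control cycle: poll the monitored sources, feed the readings to a predictive model, and fire an intervention when predicted risk crosses a plan-defined threshold. Everything in the sketch below (source names, the risk model, the threshold) is invented for illustration; it is not the platform's design.

```python
# Toy sketch of one Monitoring -> Prediction -> Intervention cycle. Source
# names, the risk model and the threshold are illustrative assumptions.
from typing import Callable, Dict

def mpi_step(sources: Dict[str, Callable[[], float]],
             predict: Callable[[Dict[str, float]], float],
             intervene: Callable[[float], None],
             risk_threshold: float = 0.8) -> None:
    readings = {name: read() for name, read in sources.items()}  # Monitoring
    risk = predict(readings)                                     # Prediction
    if risk >= risk_threshold:
        intervene(risk)                                          # Intervention
```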
A cloud-based robot system for long-term interaction: principles, implementation, lessons learned
Making the transition to long-term interaction with social-robot systems has been identified as one of the main challenges in human-robot interaction. This article identifies four design principles to address this challenge and applies them in a real-world implementation: cloud-based robot control, a modular design, one common knowledge base for all applications, and hybrid artificial intelligence for decision making and reasoning. The control architecture for this robot includes a common Knowledge Base (ontologies), a Database, a "Hybrid Artificial Brain" (dialogue manager, action selection and explainable AI), an Activities Centre (Timeline, Quiz, Break and Sort, Memory, Tip of the Day), an Embodied Conversational Agent (ECA, i.e., robot and avatar), and Dashboards (for authoring and monitoring the interaction). Further, the ECA is integrated with an expandable set of (mobile) health applications. The resulting system is a Personal Assistant for a healthy Lifestyle (PAL), which supports diabetic children with self-management and educates them on health-related issues (48 children, aged 6-14, recruited via hospitals in the Netherlands and in Italy). It is capable of autonomous interaction "in the wild" for prolonged periods of time without the need for a "Wizard of Oz" operator (up to six months online). PAL is an exemplary system that provides personalised, stable and diverse long-term human-robot interaction.
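To illustrate the "one common knowledge base" and "hybrid brain" principles, the sketch below has all activities read and write a single shared store while a rule-based selector picks the next activity. Class and method names are hypothetical; the real PAL system uses ontologies and a far richer dialogue manager.

```python
# Minimal illustration of the modular design: activities share one knowledge
# base and a (here purely rule-based) brain selects the next activity.
# Names are hypothetical, not taken from the PAL codebase.
class KnowledgeBase:
    def __init__(self):
        self.facts = {}                     # shared across all activities

    def update(self, key, value):
        self.facts[key] = value

class HybridBrain:
    def select_activity(self, kb: KnowledgeBase) -> str:
        # Stand-in rule for the hybrid AI: revisit material after a weak quiz.
        if kb.facts.get("quiz_score", 1.0) < 0.5:
            return "Tip of the Day"
        return "Quiz"
```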